A Dual Semantic-Aware Recurrent Global-Adaptive Network for Vision-and-Language Navigation

Published in IJCAI, 2023

Recommended citation: Wang, Liuyi, et al. "A Dual Semantic-Aware Recurrent Global-Adaptive Network For Vision-and-Language Navigation." arXiv preprint arXiv:2305.03602 (2023). http://academicpages.github.io/files/DSRG.pdf

Abstract: Vision-and-Language Navigation (VLN) is a realistic but challenging task that requires an agent to locate a target region using verbal and visual cues. Although significant advances have been made recently, two broad limitations remain: (1) explicit mining of the significant guiding semantics concealed in both vision and language is still under-explored; (2) previous structured-map methods provide only the average historical appearance of visited nodes, ignoring the distinctive contributions of different images and effective information retention during reasoning. This work proposes a dual semantic-aware recurrent global-adaptive network (DSRG) to address these problems. First, DSRG introduces an instruction-guidance linguistic module (IGL) and an appearance-semantics visual module (ASV) to boost semantic learning in language and vision, respectively. For the memory mechanism, a global adaptive aggregation module (GAA) is devised for explicit panoramic observation fusion, and a recurrent memory fusion module (RMF) is introduced to supply implicit temporal hidden states. Extensive experiments on the R2R and REVERIE datasets demonstrate that our method outperforms existing methods.
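
To give a flavor of the two memory ideas named in the abstract, here is a minimal PyTorch sketch, not the authors' implementation: the module names (`GlobalAdaptiveAggregation`, `RecurrentMemoryFusion`), feature dimensions, and the use of a linear scoring layer and a GRU cell are all assumptions made for illustration. It shows (a) GAA-style explicit fusion as an attention-weighted aggregation of a node's panoramic view features rather than a plain average, and (b) RMF-style implicit fusion as a recurrent hidden state carried across navigation steps.

```python
import torch
import torch.nn as nn

# Hypothetical sketch only; names, dimensions, and layer choices are assumptions,
# not the DSRG authors' code.

class GlobalAdaptiveAggregation(nn.Module):
    """Weights each panoramic view of a visited node before fusing (GAA-style)."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # learns a per-view importance score

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (num_views, dim) image features of one node's panorama
        weights = torch.softmax(self.score(view_feats), dim=0)  # (num_views, 1)
        return (weights * view_feats).sum(dim=0)                # (dim,) fused node feature


class RecurrentMemoryFusion(nn.Module):
    """Carries an implicit temporal hidden state across navigation steps (RMF-style)."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)

    def forward(self, step_feat: torch.Tensor, hidden: torch.Tensor) -> torch.Tensor:
        # step_feat, hidden: (batch, dim); returns the updated hidden state
        return self.cell(step_feat, hidden)


if __name__ == "__main__":
    gaa, rmf = GlobalAdaptiveAggregation(), RecurrentMemoryFusion()
    node = gaa(torch.randn(36, 768))                  # fuse 36 panoramic views of a node
    h = rmf(node.unsqueeze(0), torch.zeros(1, 768))   # update the recurrent memory
    print(node.shape, h.shape)
```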

Download paper here